New Error Bounds for the Linear Complementarity Problem
Authors
Abstract
Similar articles
New improved error bounds for the linear complementarity problem
New local and global error bounds are given for both nonmonotone and monotone linear complementarity problems. Comparisons of various residuals used in these error bounds are given. A possible candidate for a "best" error bound emerges from our comparisons as the sum of two natural residuals.
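A minimal sketch (Python with NumPy, not taken from the paper) of the kind of residuals such bounds are built from: the natural residual min(x, Mx + q) vanishes exactly at solutions of LCP(M, q), and a second violation-type residual is shown purely for illustration. Which two residuals the "best" candidate actually sums is specified in the paper itself; the function names here are my own.

```python
# Illustrative residuals for LCP(M, q): find x >= 0 with Mx + q >= 0 and
# x^T (Mx + q) = 0.  Not the paper's exact bound, only the ingredients.
import numpy as np

def natural_residual(M, q, x):
    """Componentwise min(x, Mx + q); zero exactly at solutions of the LCP."""
    return np.minimum(x, M @ x + q)

def violation_residual(M, q, x):
    """A standard feasibility/complementarity violation measure (illustrative)."""
    w = M @ x + q
    return (np.linalg.norm(np.minimum(x, 0.0))      # negative part of x
            + np.linalg.norm(np.minimum(w, 0.0))    # negative part of Mx + q
            + abs(x @ w))                           # complementarity gap

# Example: a small monotone LCP with known solution x* = (0, 1).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, -2.0])
x = np.array([0.1, 0.9])                            # an approximate point
print(np.linalg.norm(natural_residual(M, q, x)), violation_residual(M, q, x))
```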
Computation of Error Bounds for P-matrix Linear Complementarity Problems
We give new error bounds for the linear complementarity problem where the involved matrix is a P-matrix. Computation of a rigorous error bound can be reduced to solving a P-matrix linear interval system. Moreover, when the involved matrix is an H-matrix with positive diagonals, an error bound can be found by solving a linear system of equations, and this bound is sharper than the Mathias-Pang error bound. Pre...
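To illustrate the "single linear solve" idea, here is a hedged sketch. It assumes an error-bound constant of the form ||⟨M⟩⁻¹ max(Λ, I)||∞, with ⟨M⟩ the comparison matrix and Λ = diag(M); the paper's precise constant may differ, and the helper name h_matrix_error_constant is my own.

```python
# Sketch under an assumption: for an H-matrix M with positive diagonal, a
# computable error-bound constant of the form ||<M>^{-1} max(Lambda, I)||_inf
# can be obtained from one linear solve, because <M>^{-1} is entrywise
# nonnegative.  The paper's exact constant may differ; this only shows the mechanism.
import numpy as np

def h_matrix_error_constant(M):
    comp = -np.abs(M)                            # comparison matrix <M>:
    np.fill_diagonal(comp, np.abs(np.diag(M)))   # |m_ii| on the diagonal, -|m_ij| off it
    rhs = np.maximum(np.diag(M), 1.0)            # max(Lambda, I) applied to the all-ones vector
    u = np.linalg.solve(comp, rhs)               # one linear system instead of an optimization
    return np.max(u)                             # inf-norm of a nonnegative vector

M = np.array([[3.0, -1.0], [-1.0, 2.0]])         # an H-matrix (here even an M-matrix)
q = np.array([-1.0, 1.0])
x = np.array([0.3, 0.0])                         # approximate point for LCP(M, q)
residual = np.linalg.norm(np.minimum(x, M @ x + q), np.inf)
print(h_matrix_error_constant(M) * residual)     # bound on the distance to the solution
```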
Error bounds for symmetric cone complementarity problems
In this paper, we investigate the issue of error bounds for symmetric cone complementarity problems (SCCPs). In particular, we show that the distance between an arbitrary point in a Euclidean Jordan algebra and the solution set of the symmetric cone complementarity problem can be bounded above by certain merit functions, such as the Fischer-Burmeister merit function, the natural residual function, and the ...
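As a deliberately simplified illustration, the sketch below evaluates the Fischer-Burmeister merit function in the simplest symmetric-cone setting, the nonnegative orthant (an ordinary LCP); the Jordan-algebraic generality of the paper is not reproduced, and the function names are my own.

```python
# Fischer-Burmeister merit function over the nonnegative orthant.  The general
# Euclidean Jordan algebra version replaces these scalar operations with
# Jordan-algebraic ones; that generality is not reproduced here.
import numpy as np

def fischer_burmeister(a, b):
    """phi(a, b) = sqrt(a^2 + b^2) - a - b, zero iff a >= 0, b >= 0, a*b = 0."""
    return np.sqrt(a**2 + b**2) - a - b

def fb_merit(M, q, x):
    """Merit value 0.5 * ||phi(x, Mx + q)||^2 for the LCP(M, q)."""
    phi = fischer_burmeister(x, M @ x + q)
    return 0.5 * np.dot(phi, phi)

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, -2.0])
print(fb_merit(M, q, np.array([0.0, 1.0])))   # ~0 at the solution x* = (0, 1)
print(fb_merit(M, q, np.array([0.2, 0.8])))   # positive away from the solution
```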
A Quadratically Convergent Interior-Point Algorithm for the P*(κ)-Matrix Horizontal Linear Complementarity Problem
In this paper, we present a new path-following interior-point algorithm for P*(κ)-horizontal linear complementarity problems (HLCPs). The algorithm uses only full-Newton steps, which has the advantage that no line searches are needed. Moreover, we obtain the currently best known iteration bound for the algorithm with a small-update method, which is as good as that of the linear analogue.
New Bounds and Approximations for the Error of Linear Classifiers
In this paper, we derive lower and upper bounds for the probability of error for a linear classifier, where the random vectors representing the underlying classes obey the multivariate normal distribution. The expression of the error is derived in the one-dimensional space, independently of the dimensionality of the original problem. Based on the two bounds, we propose an approximating expressi...
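The one-dimensional reduction mentioned in the abstract can be made concrete as follows: the error of a linear rule sign(w·x + b) for Gaussian classes depends on x only through the scalar w·x + b, which is itself Gaussian. The sketch below assumes that setup; the function name linear_classifier_error and the interface are my own, not the paper's notation.

```python
# Exact error of a linear classifier for two Gaussian classes, computed from
# the 1-D projection w @ x + b (so the expression is one-dimensional regardless
# of the original dimension).  Illustrative only.
import numpy as np
from scipy.stats import norm

def linear_classifier_error(w, b, mu0, cov0, mu1, cov1, p0=0.5):
    """Misclassification probability when class i ~ N(mu_i, cov_i)
    and x is assigned to class 1 iff w @ x + b > 0."""
    m0, s0 = w @ mu0 + b, np.sqrt(w @ cov0 @ w)   # 1-D projection of class 0
    m1, s1 = w @ mu1 + b, np.sqrt(w @ cov1 @ w)   # 1-D projection of class 1
    err0 = 1.0 - norm.cdf(-m0 / s0)               # P(w @ x + b > 0 | class 0)
    err1 = norm.cdf(-m1 / s1)                     # P(w @ x + b <= 0 | class 1)
    return p0 * err0 + (1.0 - p0) * err1

w, b = np.array([1.0, 1.0]), 0.0
mu0, mu1 = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
print(linear_classifier_error(w, b, mu0, np.eye(2), mu1, np.eye(2)))
```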
Journal
Journal title: Mathematics of Operations Research
Year: 1994
ISSN: 0364-765X, 1526-5471
DOI: 10.1287/moor.19.4.880